255 research outputs found

    Real-time on-board pedestrian detection using generic single-stage algorithms and on-road databases

    [EN] Pedestrian detection is a particular case of object detection that helps to reduce accidents in advanced driver-assistance systems and autonomous vehicles. It is not an easy task because of the variability of the objects and the time constraints. A performance comparison of object detection methods, including both GPU and non-GPU implementations over a variety of on-road specific databases, is provided. Computer vision multi-class object detection can be integrated into sensor fusion modules where recall is preferred over precision. For this reason, ad hoc training with a single class for pedestrians was performed, achieving a significant increase in recall. Experiments were carried out on several architectures, and a special effort was devoted to achieving a feasible computational time for a real-time system. Finally, an analysis of the input image size allows fine-tuning the model and obtaining better results at practical cost.

    The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the PRYSTINE project, which received funding within the Electronic Components and Systems for European Leadership Joint Undertaking (ECSEL JU) in collaboration with the European Union's H2020 Framework Programme and National Authorities, under grant agreement no. 783190. It was also funded by Generalitat Valenciana through the Instituto Valenciano de Competitividad Empresarial (IVACE).

    Ortiz, V.; Del Tejo Catala, O.; Salvador Igual, I.; Perez-Cortes, J. (2020). Real-time on-board pedestrian detection using generic single-stage algorithms and on-road databases. International Journal of Advanced Robotic Systems, 17(5). https://doi.org/10.1177/1729881420929175
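In the recall-oriented setting described above, detections count as correct when they overlap a ground-truth pedestrian box sufficiently. A minimal sketch of recall and precision from greedy IoU matching (an illustration, not the paper's evaluation code; the (x1, y1, x2, y2) box format and the 0.5 IoU threshold are assumptions):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def recall_precision(gt_boxes, pred_boxes, iou_thr=0.5):
    """Greedily match each prediction to a still-unmatched ground-truth box;
    matched predictions are true positives."""
    matched, tp = set(), 0
    for p in pred_boxes:
        for i, g in enumerate(gt_boxes):
            if i not in matched and iou(p, g) >= iou_thr:
                matched.add(i)
                tp += 1
                break
    recall = tp / len(gt_boxes) if gt_boxes else 0.0
    precision = tp / len(pred_boxes) if pred_boxes else 0.0
    return recall, precision
```

In a sensor-fusion module that favors recall, the detector's confidence threshold would be lowered until recall is acceptable, accepting the resulting loss of precision.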

    Probabilistic Evaluation of 3D Surfaces Using Statistical Shape Models (SSM)

    [EN] Inspecting a 3D object whose shape has elastic manufacturing tolerances in order to find defects is a challenging and time-consuming task. It usually involves humans, either in the specification stage followed by some automatic measurements, or at other points along the process. Even when a detailed inspection is performed, the measurements are limited to a few dimensions instead of a complete examination of the object. In this work, a probabilistic method to evaluate 3D surfaces is presented. The algorithm relies on a training stage that learns the shape of the object by building a statistical shape model. Using this model, any inspected object can be evaluated to obtain the probability that the whole object, or any of its dimensions, is compatible with the model, making it easy to identify defective objects. Results in simulated and real environments are presented and compared to two different alternatives.

    This work was partially funded by Generalitat Valenciana through IVACE (Valencian Institute of Business Competitiveness), distributed nominatively to Valencian technological innovation centres under project expedient IMAMCN/2020/1.

    Pérez, J.; Guardiola Garcia, JL.; Pérez Jiménez, AJ.; Perez-Cortes, J. (2020). Probabilistic Evaluation of 3D Surfaces Using Statistical Shape Models (SSM). Sensors, 20(22):1-16. https://doi.org/10.3390/s20226554
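A statistical shape model of the kind described is typically built by PCA over corresponding landmark points; a new shape can then be scored by how far it lies from the mean in units of the model's variance. A minimal numpy sketch under that assumption (not the paper's implementation):

```python
import numpy as np

def build_ssm(shapes):
    """Build a statistical shape model (PCA over aligned, flattened shapes).
    shapes: (n_samples, n_points * dims) array of corresponding landmarks."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data yields the principal modes of variation
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    var = s**2 / (len(shapes) - 1)       # variance explained per mode
    keep = var > 1e-12                   # discard degenerate modes
    return mean, vt[keep], var[keep]

def mahalanobis_sq(shape, mean, modes, var):
    """Squared Mahalanobis distance of a shape within the model subspace."""
    b = modes @ (shape - mean)           # coefficients along each mode
    return float(np.sum(b**2 / var))
```

For a shape compatible with the model, this squared distance approximately follows a chi-square distribution with as many degrees of freedom as retained modes, which is one way to turn the distance into the probability mentioned in the abstract.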

    I3D, un sistema de inspección industrial 3D sin contacto (I3D, a contactless 3D industrial inspection system)

    Newsletter (Boletín Informativo) of the Instituto Tecnológico de Informática, devoted to Information and Communication Technologies.

    Work partially funded by the Instituto de la Pequeña y Mediana Industria de la Generalitat Valenciana (IMPIVA) and the European Union through the European Regional Development Fund (FEDER) within the R&D Programme for Technological Institutes of the IMPIVA network (2010: IMIDIC-2010/194; 2009: IMIDIC-2009/197; 2008: IMIDIC-2008/133).

    Salvador Igual, I.; Carrión Robles, D.; Perez-Cortes, J.; Sáez Barona, S. (2011). I3D, un sistema de inspección industrial 3D sin contacto. Actualidad TIC, (17):19-20. http://hdl.handle.net/10251/46126

    Global parenchymal texture features based on histograms of oriented gradients improve cancer development risk estimation from healthy breasts

    [EN] Background: The breast dense tissue percentage on digital mammograms is one of the most commonly used markers for breast cancer risk estimation. Geometric features of dense tissue over the breast and the presence of texture structures contained in sliding windows that scan the mammograms may improve the predictive ability when combined with the breast dense tissue percentage.

    Methods: A case/control study nested within a screening program, covering 1563 women aged 45 to 70 from Comunitat Valenciana (Spain) with craniocaudal and mediolateral-oblique mammograms (755 controls, plus the contralateral breast mammograms at the closest screening visit before cancer diagnosis for 808 cases), was used to extract geometric and texture features. The dense tissue segmentation was performed using DMScan and validated by two experienced radiologists. A model based on Random Forests was trained several times, varying the set of variables. A training dataset of 1172 patients was evaluated with a 10-stratified-fold cross-validation scheme. The area under the Receiver Operating Characteristic curve (AUC) was the metric for the predictive ability. The results were assessed by only considering the output after applying the model to the test set, which was composed of the remaining 391 patients.

    Results: The AUC score obtained by the dense tissue percentage (0.55) was compared to the results of a machine learning-based classifier. The classifier, apart from the percentage of dense tissue of both views, first included global geometric features such as the distance of dense tissue to the pectoral muscle, dense tissue eccentricity, or the dense tissue perimeter, obtaining an accuracy of 0.56. By including a global feature based on local histograms of oriented gradients, the accuracy of the classifier was significantly improved (0.61). The number of correctly classified patients increased from 208 to 236.

    Conclusion: Relative geometric features of dense tissue over the breast and histograms of standardized local texture features based on sliding windows scanning the whole breast improve risk prediction beyond the dense tissue percentage adjusted by geometrical variables. Other classifiers could improve the results obtained by the conventional Random Forests used in this study.

    This work was partially funded by Generalitat Valenciana through I+D IVACE (Valencian Institute of Business Competitiveness) and GVA (European Regional Development Fund) support under the project IMAMCN/2018/1, and by the Carlos III Institute of Health under the project DTS15/00080.

    Pérez-Benito, FJ.; Signol, F.; Perez-Cortes, J.; Pollán, M.; Perez-Gómez, B.; Salas-Trejo, D.; Casals, M.... (2019). Global parenchymal texture features based on histograms of oriented gradients improve cancer development risk estimation from healthy breasts. Computer Methods and Programs in Biomedicine, 177:123-132. https://doi.org/10.1016/j.cmpb.2019.05.022
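The "global feature based on local histograms of oriented gradients" aggregates gradient orientations over the breast region. A minimal numpy sketch of a global orientation histogram (an illustration of the idea only; the paper's descriptor is built from standardized sliding windows, which this omits):

```python
import numpy as np

def global_hog_histogram(img, n_bins=9):
    """Single gradient-orientation histogram summarizing texture over a
    whole image region, magnitude-weighted and L1-normalized."""
    gy, gx = np.gradient(img.astype(float))        # row and column gradients
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)        # unsigned orientation [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    total = hist.sum()
    return hist / total if total > 0 else hist
```

Such a histogram becomes one fixed-length feature vector per mammogram, which can be concatenated with the geometric features before training the Random Forest.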

    Breast Dense Tissue Segmentation with Noisy Labels: A Hybrid Threshold-Based and Mask-Based Approach

    Breast density assessed from digital mammograms is a known biomarker related to a higher risk of developing breast cancer. Supervised learning algorithms have been implemented to determine it. However, the performance of these algorithms depends on the quality of the ground-truth information, which expert readers usually provide. These expert labels are noisy approximations to the ground truth, as there is both intra- and inter-observer variability among them. Thus, it is crucial to provide a reliable method to measure breast density from mammograms. This paper presents a fully automated method based on deep learning to estimate breast density, including breast detection, pectoral muscle exclusion, and dense tissue segmentation. We propose a novel confusion matrix (CM)-YNet model for the segmentation step. This architecture includes networks to model each radiologist's noisy label and gives the estimated ground-truth segmentation as well as two parameters that allow interaction with a threshold-based labeling tool. A multi-center study involving 1785 women whose "for presentation" mammograms were obtained from 11 different medical facilities was performed. A total of 2496 mammograms were used as the training corpus, and 844 formed the testing corpus. Additionally, we included a totally independent dataset from a different center, composed of 381 women with one image per patient. Each mammogram was labeled independently by two expert radiologists using a threshold-based tool. The implemented CM-YNet model achieved the highest DICE score averaged over both test datasets (0.82±0.14) when compared to the closest dense-tissue segmentation assessment from both radiologists. The level of concordance between the two radiologists showed a DICE score of 0.76±0.17. An automatic breast density estimator based on deep learning exhibited higher performance when compared with two experienced radiologists. This suggests that modeling each radiologist's label allows for better estimation of the unknown ground-truth segmentation. The advantage of the proposed model is that it also provides the threshold parameters that enable user interaction with a threshold-based tool.

    This research was partially funded by Generalitat Valenciana through IVACE (Valencian Institute of Business Competitiveness), distributed by nomination to Valencian technological innovation centres under project expedient IMDEEA/2021/100. It was also supported by grants from Instituto de Salud Carlos III FEDER (PI17/00047).
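The DICE score used above to compare segmentations is straightforward to compute from two binary masks. A minimal numpy sketch of the metric alone (not the CM-YNet model):

```python
import numpy as np

def dice_score(mask_a, mask_b, eps=1e-8):
    """DICE coefficient 2|A∩B| / (|A| + |B|) between two binary masks;
    eps guards against division by zero for two empty masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + eps)
```

The reported 0.82±0.14 is this score averaged over the test mammograms, taking for each image the closer of the two radiologists' masks.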

    Extended a Priori Probability (EAPP): A Data-Driven Approach for Machine Learning Binary Classification Tasks

    [EN] The a priori probability of a dataset is usually used as a baseline for comparing a particular algorithm's accuracy in a given binary classification task. ZeroR is the simplest algorithm for this, predicting the majority class for all examples. However, this is an extremely simple approach that has no predictive power and does not describe other dataset features that could lead to a more demanding baseline. In this paper, we present the Extended A Priori Probability (EAPP), a novel semi-supervised baseline metric for binary classification tasks that considers not only the a priori probability but also possible bias present in the dataset, as well as other features that could provide a relatively trivial separability of the target classes. The approach is based on the area under the ROC curve (AUC ROC), known to be quite insensitive to class imbalance. The procedure involves multiobjective feature extraction and a clustering stage in the input space with autoencoders, followed by a combinatorial weighted assignment from clusters to classes depending on the distance to the nearest clusters for each class. Class labels are then assigned to establish the combination that maximizes AUC ROC for each number of clusters considered. To avoid overfitting in the combined feature extraction and clustering method, a cross-validation scheme is performed in each case. EAPP is defined for different numbers of clusters, starting from the inverse of the minority class proportion, which is useful for a fair comparison among diversely imbalanced datasets. A high EAPP usually indicates an easy binary classification task, but it may also be due to a significant coarse-grained bias in the dataset when the task is previously known to be difficult. This metric represents a baseline beyond the a priori probability to assess the actual capabilities of binary classification models.

    This work was supported in part by the Generalitat Valenciana through the Valencian Institute of Business Competitiveness (IVACE), distributed nominatively to Valencian Technological Innovation Centers under Project IMAMCN/2021/1; in part by the Cervera Network of Excellence Project in Data-Based Enabling Technologies (AI4ES), co-funded by the Centre for Industrial and Technological Development, E.P.E. (CDTI); and in part by the European Union through the Next Generation EU Fund within the Cervera Aids Program for Technological Centers under Project CER-20211030.

    Ortiz, V.; Pérez-Benito, FJ.; Del Tejo Catalá, O.; Salvador Igual, I.; Llobet Azpitarte, R.; Perez-Cortes, J. (2022). Extended a Priori Probability (EAPP): A Data-Driven Approach for Machine Learning Binary Classification Tasks. IEEE Access, 10:120074-120085. https://doi.org/10.1109/ACCESS.2022.3221936
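The cluster-then-assign idea can be illustrated with a much-simplified sketch. This is an assumed reading of the abstract, not the paper's algorithm: KMeans stands in for the autoencoder-based feature extraction and clustering, and scoring each sample by its cluster's positive-class rate stands in for the weighted cluster-to-class assignment that maximizes AUC ROC:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import roc_auc_score

def eapp_like_baseline(X, y, n_clusters):
    """Cluster the inputs, score each sample by its cluster's positive-class
    rate, and report the AUC ROC of that trivially obtainable scoring:
    a baseline beyond the a priori probability."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(X)
    rates = np.array([y[labels == k].mean() for k in range(n_clusters)])
    return roc_auc_score(y, rates[labels])
```

The paper additionally starts the cluster count at the inverse of the minority-class proportion and cross-validates the whole procedure to avoid overfitting; both refinements are omitted here.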

    A deep learning framework to classify breast density with noisy labels regularization

    Background and objective: Breast density assessed from digital mammograms is a biomarker for a higher risk of developing breast cancer. Experienced radiologists assess breast density using the Breast Imaging Reporting and Data System (BI-RADS) categories. Supervised learning algorithms have been developed with this objective in mind; however, the performance of these algorithms depends on the quality of the ground-truth information, which is usually labeled by expert readers. These labels are noisy approximations of the ground truth, as there is often intra- and inter-reader variability among labels. Thus, it is crucial to provide a reliable method to match digital mammograms with BI-RADS categories. This paper presents RegL (Labels Regularizer), a methodology that includes different image pre-processes to allow both a correct breast segmentation and the enhancement of image quality through an intensity adjustment, thus allowing the use of deep learning to classify the mammograms into BI-RADS categories. The Confusion Matrix (CM) - CNN network used implements an architecture that models each radiologist's noisy label. The final methodology pipeline was determined after comparing the performance of image pre-processes combined with different DL architectures.

    Methods: A multi-center study composed of 1395 women whose mammograms were classified into the four BI-RADS categories by three experienced radiologists is presented. A total of 892 mammograms were used as the training corpus, 224 formed the validation corpus, and 279 the test corpus.

    Results: The combination of five networks implementing the RegL methodology achieved the best results among all the models on the test set. The ensemble model obtained an accuracy of 0.85 and a kappa index of 0.71.

    Conclusions: The proposed methodology has a performance similar to that of the experienced radiologists in the classification of digital mammograms into BI-RADS categories. This suggests that the pre-processing steps and the modelling of each radiologist's label allow a better estimation of the unknown ground-truth labels.

    This work was partially funded by Generalitat Valenciana through IVACE (Valencian Institute of Business Competitiveness), distributed nominatively to Valencian technological innovation centres under project expedient IMAMCN/2021/1.
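The kappa index reported above corrects raw accuracy for the agreement expected by chance, which matters when the BI-RADS categories are imbalanced. A minimal numpy sketch of Cohen's kappa over categorical labels (the metric only, not the paper's evaluation code):

```python
import numpy as np

def cohens_kappa(y_true, y_pred, n_classes=4):
    """Cohen's kappa: observed agreement corrected for chance agreement,
    both derived from the confusion matrix of the two label sequences."""
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    n = cm.sum()
    p_obs = np.trace(cm) / n                           # observed agreement
    p_exp = (cm.sum(axis=0) @ cm.sum(axis=1)) / n**2   # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)
```

A kappa of 0.71, as reported for the ensemble, sits in the range conventionally read as substantial agreement.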

    A deep learning system to obtain the optimal parameters for a threshold-based breast and dense tissue segmentation

    [EN] Background and objective: Breast cancer is the most frequent cancer in women. The Spanish healthcare network established population-based screening programs in all Autonomous Communities, where mammograms of asymptomatic women are taken for early-diagnosis purposes. Breast density assessed from digital mammograms is a biomarker known to be related to a higher risk of developing breast cancer. It is thus crucial to provide a reliable method to measure breast density from mammograms. Furthermore, the complete automation of this segmentation process is becoming fundamental as the number of mammograms increases every day. Important challenges are related to the differences in images from different devices and the lack of an objective gold standard. This paper presents a fully automated framework based on deep learning to estimate breast density. The framework covers breast detection, pectoral muscle exclusion, and fibroglandular tissue segmentation.

    Methods: A multi-center study was performed, composed of 1785 women whose "for presentation" mammograms were segmented by two experienced radiologists. A total of 4992 of the 6680 mammograms were used as the training corpus, and the remaining 1688 formed the test corpus. This paper presents a histogram normalization step that smoothed the differences between acquisitions, a regression architecture that learned segmentation parameters as intrinsic image features, and a loss function based on the DICE score.

    Results: The results obtained indicate that the level of concordance (DICE score) reached by the two radiologists (0.77) was also achieved by the automated framework when it was compared to the closest breast segmentation from the radiologists. For the images acquired with the highest-quality device, the DICE score per acquisition device reached 0.84, while the concordance between radiologists was 0.76.

    Conclusions: An automatic breast density estimator based on deep learning exhibits performance similar to that of two experienced radiologists. This suggests that the system could be used to support radiologists and ease their workload.

    This work was partially funded by Generalitat Valenciana through I+D IVACE (Valencian Institute of Business Competitiveness) and GVA (European Regional Development Fund) support under the project IMAMCN/2019/1, and by the Carlos III Institute of Health under the project DTS15/00080.

    Perez-Benito, FJ.; Signol, F.; Perez-Cortes, J.; Fuster Bagetto, A.; Pollan, M.; Pérez-Gómez, B.; Salas-Trejo, D.... (2020). A deep learning system to obtain the optimal parameters for a threshold-based breast and dense tissue segmentation. Computer Methods and Programs in Biomedicine, 195:123-132. https://doi.org/10.1016/j.cmpb.2020.105668
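The regression architecture above learns, per image, the parameters of a threshold-based segmentation scored against the radiologists' masks with the DICE coefficient. A brute-force stand-in for that idea, assuming a single global intensity threshold per image (the real system predicts the parameters with a CNN rather than searching):

```python
import numpy as np

def dice(a, b):
    """DICE overlap between two binary masks; 1.0 by convention when
    both masks are empty."""
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def best_threshold(img, reference_mask, candidates):
    """Pick the intensity threshold whose segmentation best matches the
    reference mask, i.e. maximizes the DICE score."""
    return max((dice(img >= t, reference_mask), t) for t in candidates)
```

Training a network to predict this optimal threshold directly turns the non-differentiable search into a regression problem, which is the role of the DICE-based loss described in the Methods.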